Multiparadigm data structures in Leda
Multiparadigm programming is a term used to describe a style of software development that makes use of facilities originally designed in support of a number of different programming language paradigms. In this paper we illustrate our conception of multiparadigm programming by describing how various data structures can be implemented in the programming language Leda. Leda is a strongly typed, compiled multiparadigm programming language that we have been developing over the past several years. Our exposition serves both to illustrate the idea of multiparadigm programming and to describe the features of the language Leda.
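Leda code is not reproduced in the abstract; as a rough Python sketch of the multiparadigm idea it describes, the following exercises one container in both imperative and functional styles (the SortedBag class and its methods are our invention, not from the paper):

    # A rough sketch of multiparadigm programming: one data structure used
    # in imperative, object-oriented, and functional styles. All names
    # here are illustrative, not from the paper.

    class SortedBag:
        """Object-oriented abstraction: a multiset kept in sorted order."""

        def __init__(self):
            self.items = []

        def add(self, x):                    # imperative style: mutate in place
            i = 0
            while i < len(self.items) and self.items[i] < x:
                i += 1
            self.items.insert(i, x)

        def map(self, f):                    # functional style: build a new value
            out = SortedBag()
            for x in self.items:
                out.add(f(x))
            return out

    bag = SortedBag()
    for n in [3, 1, 2]:
        bag.add(n)                           # imperative updates
    doubled = bag.map(lambda x: 2 * x)       # functional transformation
    print(bag.items, doubled.items)          # [1, 2, 3] [2, 4, 6]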
A parallel intermediate representation based on Lambda expressions
The lambda calculus has frequently been used as an intermediate representation for programming languages, particularly for functional programming language systems. We introduce two simple extensions to the lambda calculus that describe potentially parallel computations. These extensions permit us to use the lambda calculus as an intermediate form for languages that operate on large data items as single entities, such as FP or APL. We conclude by discussing how this intermediate representation can facilitate the generation of code for different types of parallel systems.
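The abstract does not name the two extensions, so the sketch below is only an assumed illustration: it adds a single data-parallel node, parmap, to a toy lambda-calculus interpreter in Python. The node is an apply-to-all whose element applications are independent, so a code generator could schedule them in parallel:

    # Terms are tagged tuples: ("var", name), ("lam", param, body),
    # ("app", fn, arg), ("lit", value), and the assumed parallel node
    # ("parmap", fn, arg).

    def eval_ir(term, env):
        tag = term[0]
        if tag == "var":
            return env[term[1]]
        if tag == "lam":
            _, param, body = term
            return lambda v: eval_ir(body, {**env, param: v})
        if tag == "app":
            return eval_ir(term[1], env)(eval_ir(term[2], env))
        if tag == "parmap":
            # apply-to-all: element applications are independent, so a
            # code generator could emit parallel loops for this node;
            # we evaluate sequentially here
            f = eval_ir(term[1], env)
            return [f(v) for v in eval_ir(term[2], env)]
        if tag == "lit":
            return term[1]

    inc = ("lam", "x", ("app", ("var", "succ"), ("var", "x")))
    env = {"succ": lambda n: n + 1}
    print(eval_ir(("parmap", inc, ("lit", [1, 2, 3])), env))  # [2, 3, 4]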
Composition and compilation in functional programming languages
Functional programming languages, such as Backus' FP, and high-level expression-oriented languages, such as APL, are examples of programming languages in which the primary method of program construction is the process of composition. In this paper we describe an approach to generating code for languages based on compositions. The approach involves finding an intermediate representation that grows in size very slowly as additional terms are composed. In particular, the size of the intermediate representation of a composed object should be considerably smaller, and easier to interpret, than the sum of the sizes of the internal representations of the individual elements. We illustrate this technique by showing how to generate conventional code for Backus' language FP. The general technique, however, is applicable to other languages, as well as other architectures.
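One way to picture a composed representation that grows slowly is the classic map-fusion rule, map f ∘ map g = map (f ∘ g): two composed apply-to-all terms collapse into a single term over a fused function. The Python sketch below is our illustration of that idea, not the paper's FP code generator:

    def compose(f, g):
        return lambda x: f(g(x))

    class Map:
        """IR node for an apply-to-all; composition fuses into one node."""
        def __init__(self, fn):
            self.fn = fn
        def then(self, other):            # (other . self) as a single Map node
            return Map(compose(other.fn, self.fn))
        def run(self, xs):
            return [self.fn(x) for x in xs]

    double = Map(lambda x: 2 * x)
    inc = Map(lambda x: x + 1)
    fused = double.then(inc)              # one node, one pass over the data
    print(fused.run([1, 2, 3]))           # [3, 5, 7]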
LEDA: a blending of imperative and relational programming
This paper describes features of a new strongly typed, compiled programming language, called LEDA. LEDA attempts to combine aspects of both imperative (value-oriented) and logical (relation-oriented) styles of programming. Logical behavior is introduced by means of a new programming structure, called a relation. Relations have similarities to functions and procedures, but are distinct from both. Several examples are presented illustrating how the combination of features from the two paradigms is mutually beneficial. Finally, a short overview of the implementation is given.
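LEDA syntax is not shown in the abstract; as a rough Python analogy, a relation can be pictured as a callable that may succeed several times (yielding each solution) or fail (yielding nothing), unlike a function, which returns exactly one value. All names below are ours:

    # A relation over parent/child facts: it can succeed zero or more times.
    def parent(pairs):
        def rel(child):
            for p, c in pairs:
                if c == child:
                    yield p              # each yield is one way to succeed
        return rel

    facts = [("ann", "bob"), ("bob", "cal"), ("bea", "cal")]
    parent_of = parent(facts)

    def grandparent_of(child):           # composing relations: nested search
        for p in parent_of(child):
            yield from parent_of(p)

    print(list(parent_of("cal")))        # ['bob', 'bea']  -- two solutions
    print(list(grandparent_of("cal")))   # ['ann']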
Multiparadigm extensions to Java
In 1995 my students and I developed Leda, a multiparadigm language based on the Pascal model. Leda allowed programmers to create abstractions in an object-oriented, functional, or logic programming style. More recently we have been interested in recreating this work, but this time using Java as the language basis. The objective is to add as few new operations as possible, and to make these operations as close to Java as possible, so that they seem to fit naturally into the language. To date we have proposed facilities for breaking apart composed objects (sometimes called unboxing), for functions as first-class values, for pass-by-name parameters, and for relational (or logic) programming.
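The proposed Java syntax is not given in the abstract. As a language-neutral Python sketch, pass-by-name parameters, one of the listed facilities, can be modeled with thunks: the argument is wrapped in a zero-argument function and re-evaluated at every use rather than once at the call site:

    def repeat_while(cond, body):        # both parameters are passed "by name"
        while cond():                    # re-evaluated on every use
            body()

    i = 0
    def bump():
        global i
        i += 1

    repeat_while(lambda: i < 3, bump)    # thunks delay and repeat evaluation
    print(i)                             # 3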
Approximation algorithms for solving cost observable Markov decision processes
"The specifi c problem addressed in this proposal is the development of
good approximation algorithms for solving problems that have partial observability. The model we propose associates costs with obtaining information about the current state. We want to predict when and how much it is necessary to observe. We want to use our Cost Observable Markov Decision Process (COMDP) model to find good solutions for real-world problems ..."--Problem definition
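Below is a minimal value-of-information sketch of the "when to observe" question, with the states, payoffs, and cost invented for illustration (the proposal's actual COMDP algorithms are not described here): observing is worthwhile exactly when the expected gain from acting on the revealed state exceeds the observation cost.

    Q = {                                # Q[state][action] = payoff
        "s0": {"a": 10.0, "b": 0.0},
        "s1": {"a": 0.0, "b": 6.0},
    }
    belief = {"s0": 0.5, "s1": 0.5}      # uncertainty over the hidden state
    cost = 1.0                           # price of observing the true state

    def value_without_observing():
        # commit to the single action that is best in expectation
        return max(sum(belief[s] * Q[s][a] for s in Q) for a in ("a", "b"))

    def value_with_observing():
        # learn the true state first, then act optimally in it; pay the cost
        return sum(belief[s] * max(Q[s].values()) for s in Q) - cost

    print(value_without_observing())     # 5.0  (commit to action "a")
    print(value_with_observing())        # 7.0  (observe, then act) -> observe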
NET PROPHET: McCulloch and developments from his neural net model
"Fifty years after the pioneering work of McCulloch and Pitts, the study of neural nets is alive and active. In this paper, I have discussed some of the work that is of current interest to me and my co-workers. I would, perhaps, be remiss if I failed to mention some of the current hype about neural nets. Can neural nets quickly solve NP-complete problems? No. A look at the proposed nets will show that the question of whether the net will converge, or where the net will converge to, are as difficult as the original NP-complete problem. This does not prevent the neural net from giving an approximate solution to a hard optimization problem, but no one has yet proven any approximation bounds. Hard problems are only hard in the worst case, so there may be many easy instances of a hard problem. Nothing prevents a neural net from solving these easy instances quickly. Can analog neural nets compute things not computable a Turing machine? Yes. But any analog device with infinite precision has more computational power than a Turing machine, so a neural net with unlimited precision should be a very powerful device. But practically all devices are constructed with limited precision, and these limited precision devices have no more power than a Turing machine. Can neural nets compute faster than other parallel models? No. Neural nets are in fact equivalent to the usual parallel models. The only difference that can occur is if the neural net has infinite precision which as mentioned above is highly unlikely. Does learning in neural nets make programming unnecessary? No. As we saw in the discussion of learning, learning rules must be devised, and it seems that different learning tasks will require different learning rules. Further, the kind of net to use for a particular task will be an important decision. In our decoding example, some network topologies did not lead to good decoders, while other topologies did. Neural nets will not replace programmers, but give programmers another paradigm in which to program. In spite of the hype, I believe that neural nets will be useful both as biological models and as programming paradigms. Finally, according to an often-told tale, there was a golden age of neural nets which suddenly ended in 1970. Depending on the version of the tale, the golden age ended because of the Vietnam war, or Minsky and Papert's book on perceptions, or cuts in funding, or the rise of artificial intelligence. But I hope that the reader of this paper and the rest of this volume will see that the death of Warren McCulloch had a most profound effect on the field. We miss him as a brilliant scientist, as a warm human being, and as the greatest story-teller of our age."--Conclusion
Learning and reasoning
What is the relationship between learning and reasoning? Much recent work in machine learning has been criticized for focusing on learning and ignoring reasoning. This paper attempts to describe the various ways in which machine learning research has (and has not) incorporated reasoning. The paper argues that there are important computational, statistical, and engineering constraints that have produced the current state of affairs. These reasons are reviewed and assessed in the light of future research directions.
Parallel I/O requirements of four oceanography applications
Brief descriptions of the I/O requirements for four production oceanography programs running at Oregon State University are presented. The applications all rely exclusively on array-oriented, sequential file operations. Persistent files are used for checkpointing and movie making, while temporary files are used to store out-of-core data.
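The abstract describes two file roles but no APIs; the NumPy sketch below is our illustration of the pattern, with a persistent file for checkpoint/restart and a temporary memory-mapped file standing in for out-of-core data:

    import os
    import tempfile
    import numpy as np

    state = np.zeros((4, 4))                 # stand-in for model state

    # persistent checkpoint: written sequentially, survives the run
    np.save("checkpoint.npy", state)
    restored = np.load("checkpoint.npy")

    # temporary out-of-core scratch: a memory-mapped array backed by disk
    tmp = os.path.join(tempfile.mkdtemp(), "scratch.dat")
    big = np.memmap(tmp, dtype="float64", mode="w+", shape=(4, 4))
    big[:] = restored + 1.0                  # operate on the disk-backed array
    big.flush()
    print(restored.sum(), big.sum())         # 0.0 16.0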
Proposed metrics for transfer learning
Summary: Four proposed metrics (a sketch of computing them from learning curves follows the list):
[1] average relative reduction in training time (sample size, number of training experiences)
[2] jumpstart (initial advantage of transfer algorithm)
[3] handicap (how long it takes the no-transfer algorithm to overcome the jumpstart)
[4] asymptotic advantage (how much better the transfer learning algorithm does in the limit of large sample sizes)
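A hedged Python sketch of how the four metrics might be computed from two learning curves (performance per training experience); the curves and the performance threshold below are invented for illustration:

    transfer = [0.4, 0.6, 0.7, 0.8, 0.85]    # learning curve with transfer
    baseline = [0.1, 0.3, 0.5, 0.7, 0.80]    # learning curve without transfer

    def time_to_reach(curve, level):
        return next(i for i, p in enumerate(curve) if p >= level)

    level = 0.7
    t_tr, t_no = time_to_reach(transfer, level), time_to_reach(baseline, level)

    reduction = (t_no - t_tr) / t_no          # [1] relative reduction in training time
    jumpstart = transfer[0] - baseline[0]     # [2] initial advantage
    handicap = next(i for i in range(len(baseline))
                    if baseline[i] >= transfer[0])  # [3] time for no-transfer to catch up
    asymptote = transfer[-1] - baseline[-1]   # [4] advantage at the end of training

    print(reduction, jumpstart, handicap, asymptote)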